Gabrielle Sierra - Editorial Director and Producer
Molly McAnany - Associate Podcast Producer
Markus Zakaria - Audio Producer and Sound Designer
Mark Horowitz - Chair of the Electrical Engineering Department, Stanford University
Transcript
Martin GILES: Welcome to The Interconnect, a new podcast series from the Council on Foreign Relations and the Stanford Emerging Technology Review. Each episode brings together experts from critical fields of emerging technology to explore recent groundbreaking developments, what's coming over the horizon, and how the implications for American innovation leadership interconnect with the fast-changing geopolitical environment. I'm Martin Giles, and I'm the Managing Editor of the Stanford Emerging Technology Review. In this episode, we'll be focusing on semiconductors and computing.
Mark HOROWITZ: There are certain classes of problems that seem like the quantum computer could be much better but how broad that space is, I think, is currently unknown.
Sebastian ELBAUM: What happens when you don’t have access to the latest Nvidia hardware? That’s, I think, where DeepSeek comes in.
GILES: Joining me to talk about these key domains are Mark Horowitz, a member of the Review's Faculty Council and chair of the Electrical Engineering Department at Stanford University, and Sebastian Elbaum, the Technologist in Residence at the Council on Foreign Relations and professor of computer science at the University of Virginia. Thank you both for being with us today.
HOROWITZ: Thank you very much, Martin.
ELBAUM: Thank you, Martin.
GILES: I'd like to start by looking at what's happening with the phenomenon known as Moore's Law, coined by Intel co-founder, Gordon Moore, in 1965. Now, this law holds that roughly every couple of years, a chip that costs the same will have double the number of transistors on it, boosting its processing power. And this scaling has been so consistent that we've come to expect the cost of computing will keep decreasing over time, or at least not significantly increase. But there are some signs that this trend may be coming to an end if it isn't already over. Mark, is that right? And what's happening here?
HOROWITZ: The situation with Moore's Law is a little bit complicated because many people associate many different scaling trends with it and call it Moore's Law. What Gordon Moore really said back in 1965 was that the number of transistors that were the most economical to produce on an integrated circuit seemed to be increasing exponentially. That is, every few years you would get twice the number of transistors that you had before. And the key point he made was that this made those transistors cheaper, because with roughly the same area you got many more transistors, and the cost of producing that area didn't grow that rapidly, so you got more transistors for the same price. And this was phenomenal because it meant that we could expect increasing computation or increasing data storage all for relatively constant dollars, which is awesome. So what has happened over time is that we have continued to scale technology through many, many orders of magnitude, factors of 10, and this law has held. Unfortunately, while we're still able to increase the number of transistors, the cost scaling that was associated with that increasing number of transistors has come to an end, or has dramatically changed. So in recent technologies, even though we can increase density, the cost per transistor has not been falling; some claim it rises a little bit. I don't run a fab, I don't know the exact numbers, but it's certainly not decreasing exponentially like it used to.
GILES: Got it. When you say a fab, a fab refers to a fabrication plant, is that right?
HOROWITZ: A fab in the lingo is basically a very expensive building that fabricates integrated circuits and the cost of building one of these facilities is many billions of dollars today.
GILES: Got it. Sebastian, how do you think about Moore’s law and do you agree with what Mark just said? I mean, does it look like it's coming to an end?
ELBAUM: Yes, I think so. I think so. I just find it fascinating that someone made an observation like this 60 years ago and it still stands, right? I mean, I'm amazed by that fact alone. In a sense, it is interesting because this is really an observation about scientific or engineering progress, it's not about physical progress, but lasting this long in technology, it's pretty amazing. The other thing it makes me think about is that Moore's Law has been not just a prediction of what will happen; from the other side, it has actually set the roadmaps of progress, or expectations of progress, for some of the industry. It was expected that every few years we would see some really major gain in performance or in cost. So in a sense, it plays two roles. One is as a predictor, but it has also shaped how the expectations of some of these companies lined up. Right now it feels like we're really hitting some physical limits of what the law is predicting. I mean, it's really hard to come up with a wire that is thinner than an atom, right? So as we try to approach that limit, we're facing physical limitations that cannot be easily overcome, and the cost of getting close to that limit increases.
GILES: And so we kind of had a yin and yang thing between hardware and software. You see advances in the hardware drive advances in software. So we get more capable smartphones, smarter cars, and the price doesn't increase dramatically from generation to generation. Is that going to stop? Are we all going to end up paying now a lot more for electronics? Is the military going to pay a lot more for capable weapons because of this scaling coming to an end?
HOROWITZ: So what I would say is that as the cost of computation fell, it became economically viable or you had an incentive to move things into the computation domain if you could because it was cheaper than the mechanical or other alternative. So why do cars have so much electronics in them? Well, for two reasons. First of all, it made the engines more efficient, made the brakes safer with the anti-lock braking system. You could build things using electronics cheaper than you could build them in some mechanical analog. And the other reason that cars have so much electronics is that you need to build a better product next year than the product you had previously. And it was easier to add features in terms of navigation systems or other user comforts into the car using the electronics because that gave you increased capability at constant dollars. So that had really motivated a big movement into the information domain because it was the thing that was improving the quickest from an economic perspective. So the first thing that we're going to see is additional effort in design and other things to be a little bit more efficient in how you use the computing because the cost is not decreasing. And what you'll find is for applications that are in high demand, more effort will be put in trying to optimize the entire stack from the application through the system's services down to the hardware because we can't rely on the hardware basically getting cheaper.
GILES: Got it.
HOROWITZ: And this has already happened and this decrease in cost scaling has been around for many years already, and that's what the large system vendors have had to do.
ELBAUM: Yeah, yeah, I'm in total agreement. I see the shift that Mark has been referring to. I mean, we have seen that for a few years already. And I think part of the responsibility lies in what Mark mentions as the development stack, where you have several layers: at the bottom, you have the hardware layers, and as you move to the top, you have more software layers. I think there's a lot of space in the upper layers to make up for things that we took for granted just because we had very generous computing power underneath, and we didn't have to pay attention to things that we may have to in the future in order to keep the overall system cost constant.
GILES: All this is happening at the same time that artificial intelligence is exploding. In January, we saw Chinese startup DeepSeek roll out an AI model that uses lower-cost chips and less data than major models but still produces impressive results. Sebastian, what's the interplay here between what we've just been talking about in terms of the end of Moore's Law and what's happening in AI?
ELBAUM: Yeah, I think there's a constant push and pull here, right? On one hand, the amazing advances in AI—from the self-driving vehicles that use AI to perceive the world around them and to operate in very constrained scenarios, to the large language models like the ChatGPTs that we have seen in the last couple of years—could not have happened without the high performance GPUs, the graphical processing units, that we have had access to. Now, what happens when you do not have access to the latest NVIDIA hardware and you still want to build these large and powerful models? That's, I think, where DeepSeek comes in. DeepSeek is a Chinese company that released a couple of really interesting models: V3 in December, and then R1. But I think what is impressive about them is that they offer performance comparable to the models of U.S. companies, but with training costs that are about an order of magnitude lower. And that's fairly significant. Now, the paradox here is that some of the cost gains the Chinese models have, have been caused by the constraints under which those models were developed. So the Chinese engineers did not have access to the latest and greatest NVIDIA chips, so they had to make up for it. And in a sense, these constraints led them to really clever optimizations across the board: the hardware, how they managed to do low-level programming on their processors, their ability to train in specific ways, and the way that they actually built the model architecture.
HOROWITZ: Yeah, and to add: if you prevent a company from using outside equipment, and it's economically imperative for them to have this stuff, they're going to end up basically building their own. And that's going to enable them in-house to be better at doing what they need to do, because necessity is the mother of invention, as they say.
GILES: That's a good point, Mark. And we saw DeepSeek's innovation cause a significant drop in the stock market, wiping hundreds of billions of dollars off the stock market value of Nvidia, the U.S. chip giant, although it's since bounced back. Sebastian, how lasting do you think this change is gonna be?
ELBAUM: Well, the disruption in the market was pretty obvious to everyone. But NVIDIA has operated under the assumption that models are going to grow in size and complexity, and you need more of that. And basically, that underlying assumption has been challenged with the emergence of DeepSeek.
GILES: Okay. What we've been talking about so far is classical computing, but there's also a lot of work going on in the field of quantum computing, which seeks to harness these almost mystical phenomena from quantum physics to create immensely powerful computers. Ya know, Sebastian, there's been a lot of hype and hope around quantum and its future potential. Looking ahead, over the horizon, what do you think we'll see here?
ELBAUM: Well, on one hand, I don't know, it's hard not to be seduced by something as sexy as subatomic particles solving hard, large problems. And my inner nerd just gets excited about that no matter what you do. But I can see it being particularly appealing for large problems that can be parallelized. I mean, that's exactly the problem space that these types of machines can solve. Currently, in spite of the successes reported recently by Google and so forth, they still work at scales that are not commercially appealing. Even the benchmarks that they have don't have the business element in them yet. So you take it with a grain of salt. But I think that a lot of things are converging in favor of this technology. You see a lot of really good researchers working on it. You see a lot of investment. You see companies with fabrication facilities already producing some chips with this technology. So I wouldn't rule out that we're going to see it. I am not good at predicting technology. I mean, I didn't see that we would go from GPT-2, which was kind of a preschooler-level technology, to GPT-4, which is a high schooler-level technology. I mean, if quantum can imitate that rate of improvement in four years, then we're going to see some amazing things. But I think we need to be cautious and understand where the technology is today. The problems that it is solving technically are hard, they're fundamental, and it's not yet at the point where it's a technology that can be easily transitioned into a business problem.
GILES: Great. And Mark, what’s your view of quantum’s prospects?
HOROWITZ: I have dabbled in quantum computing and I've been tracking it, and I'll say that it's a very interesting space because it fundamentally changes some assumptions we have about the complexity, or how hard it is, to solve some problems. Having said that, I think the hype around quantum computing is maybe a little frothy. Jensen Huang, in his Nvidia remarks, said something about it not being available, or something to think about, for another decade or so.
GILES: Jensen being the CEO of Nvidia. Yeah.
HOROWITZ: Yes. You know, I think he's probably right. I think that there's high uncertainty in the quantum space. The one thing I know for certain is it's not a panacea. Many problems can't be computed faster on a quantum computer than on a conventional computer. There are certain classes of problems that seem like the quantum computer could be much better, but how broad that space is, I think it's currently unknown.
GILES: What are some of those examples? Mark, can you just cite a couple very quickly? I mean, what would they be?
HOROWITZ: Yeah, so the canonical example is Shor's algorithm, which is a way of factoring numbers, or calculating certain problems that are very hard, that are the basis of many of the cryptosystems we use today.
GILES: So encryption, right...
HOROWITZ: And so that's the reason people say a quantum computer could break all our security, but a quantum computer to do that would need to be a very advanced quantum computer. We're nowhere close to the scale of that quantum computer yet.
GILES: Got it. Got it.
HOROWITZ: People are working on making things better, but that's still...
GILES: I mean, I've seen it so maybe it would be very useful for things like designing new drugs, proteins, and maybe tracking navigation systems, helping us get faster to where we want to go, which I would love. But we’re still too far out.
HOROWITZ: So those are a little bit more speculative. So the new drugs have to do with using quantum chemistry, which is chemical reactions that are basically atomic reactions between different molecules. If we can simulate that more accurately, we should be able to do better stuff. But again, the scale of computer that you would need to do those calculations from the people I've talked to is still quite, quite advanced, nothing close to what we're doing right now. In the area of optimization, there's a lot of discussion about how it might be better for optimization problems, but again, that is still to be determined. It seems promising, but there isn't a...
GILES: Got it. Okay. Let’s turn now to the interconnection between semiconductor tech and geopolitics. The United States has taken steps to restrict the export to China of advanced chips, especially the AI ones that we were just discussing and chip making equipment. But doesn’t the rise of DeepSeek suggest that these restrictions have been ineffective? Sebastian, what do you think?
ELBAUM: It's hard to judge whether they have been ineffective because there may have been many other companies that have been slowed down significantly by the lack of chips. But really, DeepSeek also raises questions about the effectiveness of any type of government control on the distribution of chips. On one hand, you could say, "Well, look, we just need more controls." The controls came too late, and we really need to control the chips because the software and the models' advantage seems to be eroding given the DeepSeek models' performance. But on the other hand, the constraints imposed by those government controls led to the innovation. They planted the seeds that led to the creation of DeepSeek. So that's why we are in this conundrum. The U.S. has been trying to strike this very tricky balance where you provide enough access to hardware and chips to remain the leading country in actually diffusing the technology, but at the same time, keep others one step behind so you can keep your edge. I think what DeepSeek has shown is that that's a very, very hard balance to achieve, and it can be disrupted fairly easily, as they have shown.
HOROWITZ: The other downside is the fact that the United States has been the beneficiary of a huge influx of very talented people who come in, study at American universities, and then do great things. And I fear that in the shutting down of international trade and the suspicion about other countries, we will disincentivize these bright people from coming to the United States, and that, I think, is directly shooting ourselves in the foot. And I think that's not good. If you look at many of the startups, Nvidia in particular, they were formed by people who came to the United States.
ELBAUM: And I wouldn't be surprised now if DeepSeek is going to be, maybe it already is, the star company in China for the next few months, and it's going to attract a lot of talent. Before R1, there was this other model called V3. The paper that described that model had more than 140 authors, and those are star engineers. After publishing that paper, after releasing R1, my guess is that this company is going to have access to hundreds and hundreds of top-notch talent joining its ranks.
GILES: And I noticed there’s industry associations in the chip manufacturing world saying, "Look, we are looking ahead and we are seeing a scenario in which there will be many, many thousands of hardware jobs not filled. We just don't have the talent pipeline domestically." What should we be doing more to address that human talent issue? It's not just about the hardware and the software, it's about the humans.
ELBAUM: Yeah, absolutely. I mean, I think restricting or closing the door to talent from abroad is problematic at several levels. First of all, like Mark said, we need that talent. We have the CHIPS Act that has said, look, we're going to do fabrication and design and a lot of things at home, and we're going to invest money to do it. But where are the engineers going to come from to actually do the designs of those chips? These are not jobs where you're going to take people displaced from jobs in other areas and bring them in to produce the hardware or the AI models. You need specialized people with a lot of training. So in a sense, I'd go even further than Mark, but I think we're going to be in agreement: you cannot stop the current pipelines, but that's not enough. We actually need to increase them in order to fill the gaps that we're going to have in the next few years, both in the hardware and in the models that we're going to be creating.
GILES: Got it. Sebastian, you mentioned the CHIPS Act. So CHIPS is an acronym for Creating Helpful Incentives to Produce Semiconductors, and that allocated up to $53 billion for investing in semiconductor manufacturing in the U.S. We are concerned because today a lot of the manufacturing of advanced chips is done in Taiwan, and obviously there are geopolitical risks around that situation. So the idea of the CHIPS Act was to re-onshore some of that manufacturing. I gather that about $33 billion in grants and $5.5 billion in loans have already been awarded across 48 projects in 23 states. But is that enough? Do we need more? Do we need to move faster? Mark, what do you think?
HOROWITZ: I would say that there are many aspects of integrated circuit design. The investment so far has been mostly in on-shore manufacturing and that makes sense from the geopolitical risks that you've mentioned. I think there are many other aspects in terms of the design and how do you get interesting products to go through those manufacturing facilities because a fab is just a very expensive thing that costs a lot of money. In order to make money, you have to build product in it that people want. So I do think that having design talent and ideas for building interesting new electronic systems is very important. And I think that that also is an area that needs some thought and investment.
ELBAUM: One of the things that we do as researchers is measure things. That's the only way that we can assess whether we're moving forward or not. And one of the things that a lot of these acts have is a concrete amount of funding that they're going to give. And I think, like you're saying, this one has given most of it in 2024, but the targets are not clear. Where do we want to be in terms of securing production of chips in 5 years or in 10 years? The U.S. in the nineties had almost half of the production of chips in the world. Today, it has close to 10 percent, and most of those are not the most advanced ones. So do we want to go back to the nineties? Where are we setting the bar? And I think, to me, that's something that needs to be part of the discussion, because that's going to drive the answer to your question: are we doing enough? Are we doing it fast enough? Well, what is the goal? And that has not been stated as clearly in any of these acts or executive orders.
GILES: Got it. And one last dimension I wanted to talk about, one last strategic dimension I wanted to talk about is the issue of energy. It's great having advanced chips. It's great having wonderful computers, but we got to power them. And I was struck last year when Microsoft announced that it was part of an effort to restart the Three Mile Island Nuclear plant. It has signed a 20-year deal to take energy from there to power its data centers. How should America be thinking about its energy strategy in relation to advanced compute and what we are trying to do with semiconductors in the future?
HOROWITZ: I think that in the clamor for machine learning and AI, people are talking about just building these enormous systems. I don't think that that's sustainable. While communities might want a data center for jobs, not very many communities want to build power plants. And while the data center companies may be interested in nuclear power, that technology, that industry, has extremely long lead times. So I don't expect nuclear power in the next decade is going to really make a big difference. So for the decade that's coming up, we have to deal with the energy infrastructure we have, and power will constrain the amount of computing we can do. It used to be, as part of Moore's Law, that computation got more energy efficient as well. So as you scaled, you got not only more computation for the same cost, you got that computation for nearly the same power. So it's great: you got more for the same cost and energy. The power scaling changed a long time ago, we lost that, and that fundamentally will limit the computation we can do. And so we will need to build more efficient algorithms, because if you just extrapolate, you'll conclude that data centers will take 50 percent of the power at some point. I don't think that's going to happen; economically, I don't think it's viable. And so something else is going to give, and we're not going to scale the computation as fast as people say.
GILES: That's great. We started with Moore's Law, we ended with Moore's Law. Perfect. We have about a minute left, and I just want to do a quick lightning round of questions. If you had to pick one person from computing history to have dinner with, who would it be and why? Sebastian?
ELBAUM: This is a lightning round and I feel like I'm not lightning. I'm not lightning. Mark, do you want to go first?
HOROWITZ: Yeah, I'll do Turing. Alan Turing.
GILES: Alan Turing, and why?
HOROWITZ: Turing was the person who was a crypto breaker, did early work on the theory of computation, and was just a really interesting character. I'd love to just chat with him and understand a little bit more about what he thinks about.
GILES: Got it. Sebastian, any thoughts?
ELBAUM: Well, I think there are a lot of Turing Award winners that I would like to meet. Among the latest round, just starting from the most recent, I think if I could meet Hassabis and Hinton on AI, I think that would be a pleasure.
GILES: So Demis Hassabis and Geoffrey Hinton, who just won the Nobel Chemistry and Physics Prizes for work involving AI, right?
ELBAUM: That's right, that's right.
GILES: Fabulous. What's one useful app you'd really love to see created that you don't have on your phone right now?
HOROWITZ: The most useful app I would like is an app that tells me when I'm going to really make a really stupid mistake that I do often and just warns me a little bit ahead of time. I don't know how to create that app, but I'd love to have it.
ELBAUM: A personal error predictor, that's what Mark is talking about.
GILES: Personal error predictor. Awesome. That's great. Unlimited storage or unlimited processing power, which would you choose? And no, you can't coordinate to share. Sebastian?
ELBAUM: Processing power, definitely. And yeah, Mark?
HOROWITZ: Yeah, I would do processing power as well.
GILES: Wow. Okay. I'm going to have all the storage. Unfortunately, we don't have unlimited time, but thank you so much for joining me today and for this terrific conversation.
ELBAUM: Thank you. It was great to meet you, Mark. And thanks for inviting us here.
HOROWITZ: It was great talking with you Sebastian, and thanks for inviting me, Martin.
GILES: For resources used in this episode and more information, visit CFR.org/TheInterconnect and take a look at the show notes. If you ever have any questions or suggestions, connect with us at [email protected]. And to read the new 2025 Stanford Emerging Technology Review visit SETR.Stanford.edu. That's S-E-T-R.stanford.edu.
The Interconnect is a production of the Council on Foreign Relations and the Stanford Emerging Technology Review from the Hoover Institution and the Stanford School of Engineering. The opinions expressed on the show are solely those of the guests, not of CFR, which takes no institutional positions on matters of policy. Nor do they reflect the opinions of the Hoover Institution or of Stanford School of Engineering.
This episode was produced by Gabrielle Sierra, Molly McAnany, Shana Farley, Malaysia Atwater. Our audio producer is Markus Zakaria. Special thanks to our recording engineers Lori Becker and Bryan Medives. You can subscribe to the show on Apple Podcasts, Spotify, YouTube or wherever you get your audio. For The Interconnect this is Martin Giles. Thanks for listening.
Show Notes
Microchips power everything from smartphones to artificial intelligence systems and advanced weapons—and access to the most advanced chips has become a source of growing geopolitical tension between China and the United States and its allies.
In this episode of The Interconnect, Stanford Emerging Technology Review Faculty Council member Mark Horowitz and Sebastian Elbaum, the Technologist in Residence at the Council on Foreign Relations, discuss how key trends in the chip industry are shaping the future of computing, the effectiveness of export controls that restrict international sales, and the issues policymakers need to focus on to bolster the U.S. domestic chip supply chain.
Read the 2025 Stanford Emerging Technology Review at https://setr.stanford.edu/
Podcast with Martin Giles, Luciana L. Borio and Drew Endy March 27, 2025 The Interconnect
Podcast with Martin Giles, Esther Brimmer and Simone D’Amico March 13, 2025 The Interconnect
Podcast with Martin Giles, Kevin Weil and Allison Okamura February 27, 2025 The Interconnect